Time complexity

Time complexity describes how an algorithm's execution time scales with the size of its input.

Let’s assume we are dealing with algorithms that take a list of n items.
For example, consider an algorithm that iterates over the list and returns the greatest item.
This algorithm only needs a single pass to find the result, so if we double n, the execution time also doubles.
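A sketch of that algorithm (in Python; the original doesn't name a language) — a single pass over the list, so the work grows linearly with n:

```python
def greatest(items):
    # One pass over the list: the work done grows linearly with len(items).
    result = items[0]
    for item in items[1:]:
        if item > result:
            result = item
    return result

greatest([3, 41, 7, 12])  # scans all n items exactly once
```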

If we described the execution time of that algorithm as a mathematical function ExecutionTime(n), the equation would look something like this:

execution_time = n * constant

where constant covers any work in the algorithm that doesn't change when the size of the list (n) increases.

We only care about how n directly affects the execution_time.

The execution time in this equation grows proportionally to n, and to describe that growth we use Big O notation. In this example, the time complexity is O(n), which tells us that the execution time changes proportionally to n.

If the execution_time equation had multiple terms, e.g. execution_time = 2n + n², we would only keep the dominant term (the one that grows fastest), in this example n², since it dominates more and more as n increases. So in Big O notation it would be O(n²).
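To see that domination numerically (a quick sketch of the 2n + n² example above), we can compute what share of the total the n² term accounts for as n grows:

```python
# Share of the n² term in 2n + n²: it approaches 100% as n grows.
for n in [10, 100, 1000]:
    total = 2 * n + n ** 2
    share = n ** 2 / total
    print(n, round(share, 3))
```

Already at n = 1000, the 2n term contributes less than 1% of the total, which is why Big O keeps only n².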

Example

If an algorithm has a time complexity of O(n), and we increase the size of n by 10x, we should expect the execution time to also increase by 10x. But if the algorithm is O(n²), we should expect the execution time to increase by 100x instead, because 10² = 100.

An example of an O(n²) algorithm:
Produce all the ordered combinations of pairs from a list of n items. E.g.: input = [A,B,C], which is size: 3
output = [[A,A], [A,B], [A,C], [B,A], [B,B], [B,C], [C,A], [C,B], [C,C]], which is size: 9 (3²)
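A sketch of that algorithm in Python (language assumed, not stated in the original): two nested loops over the list, so for each of the n items we do n units of work, giving n * n pairs.

```python
def ordered_pairs(items):
    # Nested loops: each of the n items is paired with all n items,
    # producing n * n pairs -> O(n²) work.
    pairs = []
    for a in items:
        for b in items:
            pairs.append([a, b])
    return pairs

ordered_pairs(["A", "B", "C"])  # returns 9 pairs (3²)
```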

Other time complexities

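A common complexity beyond O(n) and O(n²) is O(log n): for example, binary search on a sorted list, where each step halves the remaining search range. A minimal sketch (assuming Python, and a list sorted in ascending order):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search range, so the number of
    # iterations grows as log2(n) -> O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```

Doubling n here adds only one extra iteration, which is why O(log n) algorithms scale so well.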
Other notations

Big O notation describes an upper bound on the time complexity.
Other notations describe a lower bound (Big Omega, Ω) or a bound on both sides (Big Theta, Θ).

In general, these notations are called asymptotic notations.

Further reading